
Keyword Search Result

[Keyword] machine learning (172 hits)

Showing results 81-100 of 172

  • A Release-Aware Bug Triaging Method Considering Developers' Bug-Fixing Loads

    Yutaro KASHIWA  Masao OHIRA  

     
    PAPER-Software Engineering

    Publicized: 2019/10/25  Vol: E103-D No:2  Page(s): 348-362

    This paper proposes a release-aware bug triaging method that aims to increase the number of bugs that developers can fix by the next release date during open-source software development. A variety of methods have been proposed for recommending appropriate developers for particular bug-fixing tasks, but since these approaches only consider the developers' ability to fix the bug, they tend to assign many of the bugs to a small number of the project's developers. Since projects generally have a release schedule, even excellent developers cannot fix all the bugs that are assigned to them by the existing methods. The proposed method places an upper limit on the number of tasks assigned to each developer during a given period, in addition to considering the ability of developers. Our method regards the bug assignment problem as a multiple knapsack problem, finding the best combination of bugs and developers. The best combination is one that maximizes the efficiency of the project while meeting the constraint that it can only assign as many bugs as the developers can fix during a given period. We conduct a case study, applying our method to bug reports from Mozilla Firefox, Eclipse Platform and GNU Compiler Collection (GCC). We find that our method has the following properties: (1) it can prevent the bug-fixing load from being concentrated on a small number of developers; (2) compared with the existing methods, the proposed method can assign a more appropriate number of bugs that each developer can fix by the next release date; (3) it can reduce the time taken to fix bugs by 35%-41% compared with manual bug triaging.
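
    To make the formulation concrete, here is a minimal sketch (not the authors' implementation) of capacity-constrained bug assignment: each developer has an assumed upper limit on bugs per release period, and bugs are assigned greedily by a hypothetical suitability score, approximating the multiple knapsack formulation described above.

        # Illustrative sketch: capacity-constrained bug assignment (a greedy
        # approximation of the multiple-knapsack formulation in the abstract).
        # Suitability scores, capacities, and data below are hypothetical.

        def assign_bugs(bugs, developers, suitability, capacity):
            """Assign each bug to at most one developer without exceeding capacities."""
            assignment = {}
            load = {d: 0 for d in developers}
            # Consider (bug, developer) pairs in decreasing order of suitability.
            pairs = sorted(((suitability[b][d], b, d) for b in bugs for d in developers),
                           reverse=True)
            for score, bug, dev in pairs:
                if bug not in assignment and load[dev] < capacity[dev]:
                    assignment[bug] = dev
                    load[dev] += 1
            return assignment

        bugs = ["BUG-1", "BUG-2", "BUG-3", "BUG-4"]
        developers = ["alice", "bob"]
        suitability = {                      # e.g., predicted fixing ability per developer
            "BUG-1": {"alice": 0.9, "bob": 0.4},
            "BUG-2": {"alice": 0.8, "bob": 0.7},
            "BUG-3": {"alice": 0.6, "bob": 0.5},
            "BUG-4": {"alice": 0.3, "bob": 0.6},
        }
        capacity = {"alice": 2, "bob": 2}    # bugs fixable before the next release
        print(assign_bugs(bugs, developers, suitability, capacity))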

  • Formal Verification of a Decision-Tree Ensemble Model and Detection of Its Violation Ranges

    Naoto SATO  Hironobu KURUMA  Yuichiroh NAKAGAWA  Hideto OGAWA  

     
    PAPER-Dependable Computing

    Publicized: 2019/11/20  Vol: E103-D No:2  Page(s): 363-378

    As one type of machine-learning model, a “decision-tree ensemble model” (DTEM) is represented by a set of decision trees. A DTEM is mainly known to be valid for structured data; however, like other machine-learning models, it is difficult to train so that it returns the correct output value (called the “prediction value”) for any input value (called the “attribute value”). Accordingly, when a DTEM is used in a system that requires reliability, it is important to comprehensively detect attribute values that lead to malfunctions of the system (failures) during development and to take appropriate countermeasures. One conceivable solution is to install an input filter that controls the input to the DTEM and to use separate software to process attribute values that may lead to failures. To develop the input filter, it is necessary to specify the filtering condition for the attribute values that lead to malfunctions of the system. In consideration of that necessity, we propose a method for formally verifying a DTEM and, if an attribute value leading to a failure is found as a result of the verification, extracting the range in which such an attribute value exists. The proposed method can comprehensively extract the range in which the attribute value leading to the failure exists; therefore, by creating an input filter based on that range, it is possible to prevent the failure. To demonstrate the feasibility of the proposed method, we performed a case study using a dataset of house prices. Through the case study, we also evaluated its scalability and showed that the number and depth of the decision trees are important factors that determine the applicability of the proposed method.
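
    As a rough illustration of range extraction (not the authors' formal verification procedure), the sketch below assumes a single scikit-learn decision tree stands in for the ensemble: every root-to-leaf path defines a hyper-rectangle of attribute values, so the ranges whose leaf prediction violates an assumed property can be enumerated by walking the tree. The data and the property limit are hypothetical.

        # Illustrative sketch: enumerate attribute-value ranges (per tree leaf) whose
        # predicted value violates an assumed property, for a single scikit-learn tree.
        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        LIMIT = 25.0                                  # hypothetical safety bound on the output
        rng = np.random.default_rng(0)
        X = rng.uniform(0, 10, size=(200, 2))
        y = 3 * X[:, 0] + rng.normal(0, 1, 200)
        tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)

        def violating_ranges(dt, limit):
            """Return one (low, high) box per feature for each leaf predicting above limit."""
            t = dt.tree_
            results = []

            def walk(node, box):
                left, right = t.children_left[node], t.children_right[node]
                if left == -1:                        # leaf node
                    if t.value[node][0][0] > limit:   # property violated at this leaf
                        results.append(list(box))
                    return
                f, thr = t.feature[node], t.threshold[node]
                lo, hi = box[f]
                box[f] = (lo, min(hi, thr))
                walk(left, box)
                box[f] = (max(lo, thr), hi)
                walk(right, box)
                box[f] = (lo, hi)                     # restore before returning

            walk(0, [(-np.inf, np.inf)] * dt.n_features_in_)
            return results

        for box in violating_ranges(tree, LIMIT):
            print("violating attribute ranges:", box)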

  • Knowledge Discovery from Layered Neural Networks Based on Non-negative Task Matrix Decomposition

    Chihiro WATANABE  Kaoru HIRAMATSU  Kunio KASHINO  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2019/10/23  Vol: E103-D No:2  Page(s): 390-397

    Interpretability has become an important issue in the machine learning field, along with the success of layered neural networks in various practical tasks. Since a trained layered neural network consists of complex nonlinear relationships among a large number of parameters, it is hard to understand how it achieves its input-output mapping for a given data set. In this paper, we propose the non-negative task matrix decomposition method, which applies non-negative matrix factorization to a trained layered neural network. This enables us to decompose the inference mechanism of a trained layered neural network into multiple principal tasks of input-output mapping and to reveal the roles of hidden units in terms of their contribution to each principal task.
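
    A rough sketch of the general idea, assuming a random non-negative matrix stands in for one actually derived from a trained network: scikit-learn's NMF decomposes it into parts that can be read as principal tasks and hidden-unit contributions. This is an illustration, not the paper's exact non-negative task matrix decomposition.

        # Illustrative sketch: apply non-negative matrix factorization to a
        # non-negative matrix assumed to be derived from a trained layered network.
        # Here a random |weight|-like matrix stands in for the real network, and the
        # number of "principal tasks" (components) is a hypothetical choice.
        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        task_matrix = np.abs(rng.normal(size=(64, 32)))   # e.g., hidden units x outputs

        nmf = NMF(n_components=4, init="nndsvd", max_iter=500, random_state=0)
        W = nmf.fit_transform(task_matrix)   # hidden units x principal tasks
        H = nmf.components_                  # principal tasks x outputs

        # The largest entries of each column of W suggest which hidden units
        # contribute most to each principal task.
        for k in range(W.shape[1]):
            top_units = np.argsort(W[:, k])[::-1][:5]
            print(f"task {k}: top hidden units {top_units.tolist()}")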

  • Android Malware Detection Scheme Based on Level of SSL Server Certificate

    Hiroya KATO  Shuichiro HARUTA  Iwao SASASE  

     
    PAPER-Dependable Computing

    Publicized: 2019/10/30  Vol: E103-D No:2  Page(s): 379-389

    Detecting Android malware is imperative. As a promising Android malware detection approach, we focus on schemes that leverage the differences in traffic patterns between benign apps and malware. Those differences can be captured even if the packets are encrypted. However, since such features are merely statistical, they cannot identify whether individual traffic is malicious. Thus, it is necessary to design a scheme that is applicable to encrypted traffic data and supports the identification of malicious traffic. In this paper, we propose an Android malware detection scheme based on the level of the SSL server certificate. Attackers tend to use untrusted certificates to encrypt malicious payloads in many cases, because rigorous examination is required to obtain a trusted certificate. Thus, we utilize SSL-server-certificate-based features for detection, since attackers' certificates tend to be untrusted. Furthermore, in order to obtain more exact features, we introduce weight values based on required permissions, because malware inevitably requires permissions related to its malicious actions. By computer simulation with a real dataset, we show that our scheme achieves an accuracy of 92.7%. The true positive rate and false positive rate are 5.6% higher and 3.2% lower than those of the previous scheme, respectively. Our scheme can cope with encrypted malicious payloads and 89 malware samples that are not detected by the previous scheme.
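
    The following sketch illustrates the kind of feature construction described above: certificate-level features (self-signed, expired, untrusted issuer, validity period) scaled by a weight derived from the permissions an app requests, then fed to an ordinary classifier. All field names, weights, data, and the classifier choice are assumptions made for illustration.

        # Illustrative sketch: SSL-certificate-level features weighted by
        # required-permission-based values, then classified. Field names, weights,
        # and data are hypothetical.
        from sklearn.ensemble import RandomForestClassifier

        RISKY_PERMISSIONS = {"SEND_SMS", "READ_CONTACTS", "RECEIVE_BOOT_COMPLETED"}

        def cert_features(cert, permissions):
            # Heavier weight when more risky permissions are requested.
            weight = 1.0 + sum(p in RISKY_PERMISSIONS for p in permissions)
            base = [
                1.0 if cert["self_signed"] else 0.0,
                1.0 if cert["expired"] else 0.0,
                1.0 if cert["issuer_untrusted"] else 0.0,
                min(cert["validity_days"] / 365.0, 10.0),
            ]
            return [f * weight for f in base]

        apps = [
            ({"self_signed": True, "expired": False, "issuer_untrusted": True,
              "validity_days": 90}, {"SEND_SMS", "INTERNET"}, 1),          # malware
            ({"self_signed": False, "expired": False, "issuer_untrusted": False,
              "validity_days": 365}, {"INTERNET"}, 0),                     # benign
        ]
        X = [cert_features(cert, perms) for cert, perms, _ in apps]
        y = [label for _, _, label in apps]
        clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
        print(clf.predict(X))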

  • On the Detection of Malicious Behaviors against Introspection Using Hardware Architectural Events

    Huaizhe ZHOU  Haihe BA  Yongjun WANG  Tie HONG  

     
    LETTER-Artificial Intelligence, Data Mining

    Publicized: 2019/10/09  Vol: E103-D No:1  Page(s): 177-180

    The arms race between offense and defense in the cloud impels the innovation of techniques for monitoring attacks and unauthorized activities. The promising technique of virtual machine introspection (VMI) has become prevalent for its tamper-resistant capability. However, some elaborate exploits are capable of invalidating VMI-based tools by breaking the assumption of a trusted guest kernel. To achieve more reliable and robust introspection, in this paper we introduce a practical approach to monitoring and detecting attacks that attempt to subvert VMI. Our approach combines supervised machine learning and hardware architectural events to identify malicious behaviors that target VMI techniques. To demonstrate its feasibility, we implement a prototype named HyperMon on the Xen hypervisor. The results of our evaluation show the effectiveness of HyperMon in detecting malicious behaviors with an average accuracy of 90.51% (AUC).

  • Understanding Developer Commenting in Code Reviews

    Toshiki HIRAO  Raula GAIKOVINA KULA  Akinori IHARA  Kenichi MATSUMOTO  

     
    PAPER

    Publicized: 2019/09/11  Vol: E102-D No:12  Page(s): 2423-2432

    Modern code review is a well-known practice for assessing software quality in which developers discuss the quality in a web-based review tool. However, this lightweight approach may risk inefficient review participation, especially when comments become either excessive (i.e., too many) or underwhelming (i.e., too few). In this study, we investigate the phenomena of reviewer commenting. Through a large-scale empirical analysis of over 1.1 million reviews from five OSS systems, we conduct an exploratory study to investigate the frequency, size, and evolution of reviewer commenting. Moreover, we also conduct a modeling study to understand the most important features that potentially drive reviewer comments. Our results show that (i) the number of comments and the number of words in the comments tend to vary among reviews and across the studied systems; (ii) reviewers change their commenting behaviour over time; and (iii) human experience and patch property aspects impact the number of comments and the number of words in the comments.

  • Improved Majority Filtering Algorithm for Cleaning Class Label Noise in Supervised Learning

    Muhammad Ammar MALIK  Jae Young CHOI  Moonsoo KANG  Bumshik LEE  

     
    LETTER-Digital Signal Processing

    Vol: E102-A No:11  Page(s): 1556-1559

    In most supervised learning problems, the labelling quality of datasets plays a paramount role in the learning of high-performance classifiers. The performance of a classifier can be significantly degraded if it is trained with mislabeled data. Therefore, identification of such examples in the dataset is of critical importance. In this study, we propose an improved majority filtering algorithm that utilizes the ability of a support vector machine to capture potentially mislabeled examples as support vectors (SVs). The key technical contribution of our work is that the base (or component) classifiers that constitute the ensemble are trained using non-SV examples, while at testing time the examples captured as SVs are employed. An example is tagged as mislabeled if the majority of the base classifiers classify it incorrectly. Experimental results confirmed that our algorithm not only showed high accuracy, with higher F1 scores, in identifying the mislabeled examples, but was also significantly faster than previous methods.
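
    A compact sketch of the idea under assumed synthetic data: an SVM marks potentially mislabeled examples as support vectors, base classifiers are trained on the non-SV examples, and the SV examples that the majority of base classifiers misclassify are flagged as mislabeled.

        # Illustrative sketch: flag potentially mislabeled examples by training base
        # classifiers on non-support-vector examples and letting them vote on the
        # support vectors. Data and classifier choices are hypothetical.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.naive_bayes import GaussianNB

        X, y = make_classification(n_samples=400, n_features=10, random_state=0)
        y_noisy = y.copy()
        flip = np.random.default_rng(0).choice(len(y), size=20, replace=False)
        y_noisy[flip] ^= 1                                   # inject label noise

        sv = SVC(kernel="rbf").fit(X, y_noisy).support_      # candidate mislabeled set
        non_sv = np.setdiff1d(np.arange(len(y)), sv)

        bases = [DecisionTreeClassifier(random_state=0),
                 LogisticRegression(max_iter=1000), GaussianNB()]
        votes = np.zeros(len(sv))
        for clf in bases:
            clf.fit(X[non_sv], y_noisy[non_sv])
            votes += (clf.predict(X[sv]) != y_noisy[sv])     # vote "mislabeled"

        suspects = sv[votes > len(bases) / 2]                # majority says mislabeled
        print("flagged:", len(suspects), "truly flipped among them:",
              np.intersect1d(suspects, flip).size)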

  • Learning-Based, Distributed Spectrum Observation System for Dynamic Spectrum Sharing in the 5G Era and Beyond

    Masaki KITSUNEZUKA  Kenta TSUKAMOTO  Jun SAKAI  Taichi OHTSUJI  Kazuaki KUNIHIRO  

     
    PAPER

    Publicized: 2019/02/20  Vol: E102-B No:8  Page(s): 1526-1537

    Dynamic sharing of limited radio spectrum resources is expected to satisfy the increasing demand for spectrum resources in the upcoming 5th-generation mobile communication system (5G) era and beyond. Distributed real-time spectrum sensing is a key enabler of dynamic spectrum sharing, but the costs incurred in transmitting the observed data are a critical problem, especially when massive numbers of spectrum sensors are deployed. To cope with this issue, the proposed spectrum sensors learn the ambient radio environment in real time and create a time-spectral model whose parameters are shared with servers operating in the edge-computing layer. This makes it possible to significantly reduce the communication cost of the sensors, because frequent data transmission is no longer needed, while enabling the edge servers to keep up with the current status of the radio environment. On the basis of the created time-spectral model, sharable spectrum resources are dynamically harvested and allocated in the geospatial, temporal, and frequency-spectral domains when an application for secondary spectrum use is accepted. A web-based prototype spectrum management system has been implemented using ten servers and dozens of sensors. Measured results show that the proposed approach can reduce data traffic between the sensors and servers by 97%, achieving an average data rate of 10 kilobits per second (kbps). In addition, the basic operation flow of the prototype has been verified through a field experiment conducted at a manufacturing facility and a proof-of-concept experiment on dynamic spectrum sharing using wireless local-area-network equipment.

  • Clustering Malicious DNS Queries for Blacklist-Based Detection

    Akihiro SATOH  Yutaka NAKAMURA  Daiki NOBAYASHI  Kazuto SASAI  Gen KITAGATA  Takeshi IKENAGA  

     
    LETTER-Information Network

    Publicized: 2019/04/05  Vol: E102-D No:7  Page(s): 1404-1407

    Some of the most serious threats to network security involve malware. One common way to detect malware-infected machines in a network is by monitoring communications based on blacklists. However, such detection is problematic because (1) no blacklist is completely reliable, and (2) blacklists do not provide sufficient evidence to allow administrators to determine the validity and accuracy of the detection results. In this paper, we propose a malicious DNS query clustering approach for blacklist-based detection. Unlike conventional classification, our cause-based classification can efficiently analyze malware communications, allowing infected machines in the network to be addressed swiftly.

  • Weber Centralized Binary Fusion Descriptor for Fingerprint Liveness Detection

    Asera WAYNE ASERA  Masayoshi ARITSUGI  

     
    LETTER-Pattern Recognition

    Publicized: 2019/04/17  Vol: E102-D No:7  Page(s): 1422-1425

    In this research, we propose a novel method for determining fingerprint liveness that improves the discriminative behavior and classification accuracy of combined features. This approach detects whether a fingerprint comes from a live or a fake source. In this approach, fingerprint images are analyzed in terms of the differential excitation (DE) component and the centralized binary pattern (CBP) component, which yield a DE image and a CBP image, respectively. The images obtained are used to generate a two-dimensional histogram that is subsequently used as a feature vector. To decide whether a fingerprint image is from a live or fake source, the feature vector is processed using support vector machine (SVM) classifiers. To evaluate the performance of the proposed method and compare it with existing approaches, we conducted experiments using the datasets from the 2011 and 2015 Liveness Detection Competitions (LivDet), collected from four sensors. The results show that the proposed method gave comparable or even better results and further demonstrate that methods based on a combination of features perform better than existing methods.
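
    A rough sketch of the feature pipeline, with simplified stand-ins: a differential-excitation-like map and a local-pattern-like map are computed per image, combined into a two-dimensional histogram, and the flattened histogram is classified with an SVM. The image operators below are crude approximations, not the exact DE/CBP definitions, and the data are synthetic.

        # Illustrative sketch: 2D histogram of a differential-excitation-like map and
        # a local-pattern-like map, fed to an SVM. The operators are simplified
        # approximations of DE/CBP, and the data are synthetic.
        import numpy as np
        from sklearn.svm import SVC

        def de_like(img):
            """Approximate differential excitation: arctan of neighbor-difference ratio."""
            diff = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                    np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
            return np.arctan2(diff, img + 1e-6)

        def pattern_like(img):
            """Crude local binary code built from four neighbors."""
            code = np.zeros_like(img, dtype=int)
            for k, (dy, dx) in enumerate([(1, 0), (-1, 0), (0, 1), (0, -1)]):
                code += (np.roll(img, (dy, dx), (0, 1)) >= img).astype(int) << k
            return code

        def feature(img):
            h, _, _ = np.histogram2d(de_like(img).ravel(), pattern_like(img).ravel(),
                                     bins=(8, 16))
            return (h / h.sum()).ravel()

        rng = np.random.default_rng(0)
        imgs = rng.uniform(0, 1, size=(40, 32, 32))
        labels = np.repeat([0, 1], 20)                 # 0 = fake, 1 = live (synthetic)
        X = np.array([feature(im) for im in imgs])
        clf = SVC(kernel="rbf").fit(X, labels)
        print(clf.score(X, labels))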

  • Methods for Adaptive Video Streaming and Picture Quality Assessment to Improve QoS/QoE Performances Open Access

    Kenji KANAI  Bo WEI  Zhengxue CHENG  Masaru TAKEUCHI  Jiro KATTO  

     
    INVITED PAPER

    Publicized: 2019/01/22  Vol: E102-B No:7  Page(s): 1240-1247

    This paper introduces recent trends in video streaming and four methods proposed by the authors for video streaming. As current trends show, video traffic dominates the Internet, and new visual content such as UHD and 360-degree movies is being delivered. MPEG-DASH has become popular for adaptive video streaming, and machine learning techniques are being introduced in several parts of the video streaming pipeline. Along with these research trends, the authors have developed four methods: route navigation, throughput prediction, image quality assessment, and perceptual video streaming. These methods contribute to improving QoS/QoE performance and reducing power consumption and storage size.

  • Medical Healthcare Network Platform and Big Data Analysis Based on Integrated ICT and Data Science with Regulatory Science Open Access

    Ryuji KOHNO  Takumi KOBAYASHI  Chika SUGIMOTO  Yukihiro KINJO  Matti HÄMÄLÄINEN  Jari IINATTI  

     
    INVITED PAPER

    Publicized: 2018/12/19  Vol: E102-B No:6  Page(s): 1078-1087

    This paper provides perspectives on future medical healthcare social services and businesses that integrate advanced information and communication technology (ICT) and data science. First, we propose a universal medical healthcare platform that consists of a wireless body area network (BAN), a cloud network and edge computers, and a big data mining server and repository with machine learning. Technical aspects of the platform are discussed, including the requirements of reliability, safety and security, i.e., so-called dependability. In addition, novel technologies for satisfying these requirements are introduced. Then, primary uses of the platform for personalized medicine and regulatory compliance, and its secondary uses for commercial business and sustainable operation, are discussed. We aim to operate the universal medical healthcare platform, which is based on the principles of regulatory science, both regionally and globally. In this paper, trials carried out in Kanagawa, Japan, and Oulu, Finland, are described to illustrate a future medical healthcare social infrastructure that can be expanded to the Asia-Pacific region, Europe and the rest of the world. We also present the activities of the Kanagawa medical device regulatory science center and a joint proposal on security for the dependable medical healthcare platform. Novel schemes of ubiquitous rehabilitation, based on analyses of the training effect through remote monitoring of activities and machine learning of the patient's electrocardiography (ECG) with a neural network, are proposed and briefly investigated.

  • A Sequential Classifiers Combination Method to Reduce False Negative for Intrusion Detection System

    Sornxayya PHETLASY  Satoshi OHZAHATA  Celimuge WU  Toshihito KATO  

     
    PAPER

    Publicized: 2019/02/27  Vol: E102-D No:5  Page(s): 888-897

    An intrusion detection system (IDS) is a device or software that monitors a network for malicious activity. In terms of detection results, there can be two types of errors, namely, the false positive (FP), which incorrectly detects normal traffic as abnormal, and the false negative (FN), which incorrectly judges malicious traffic as normal. To protect the network, FN should be kept as low as possible. However, since there is a trade-off between FP and FN when an IDS detects malicious traffic, it is difficult to reduce both metrics simultaneously. In this paper, we propose a sequential classifier combination method to reduce the effect of this trade-off. A single classifier generally suffers from a high FN rate; therefore, additional classifiers are sequentially combined in order to detect more positives (i.e., reduce FN). Since each classifier can reduce FN without generating much FP in our approach, we can achieve a reduction of FN at the final output. In the evaluations, we use the NSL-KDD dataset, which is an updated version of the KDD Cup'99 dataset. WEKA is utilized as the classification tool in the experiments, and the results show that the proposed approach can reduce FN while improving sensitivity and accuracy.
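
    A minimal sketch of the sequential combination idea under synthetic data: traffic that a classifier labels as normal is passed on to the next classifier, and anything flagged positive at any stage stays positive, so each later stage can only recover false negatives.

        # Illustrative sketch: sequential classifier combination in which each later
        # classifier re-examines only the traffic the earlier ones called "normal",
        # so false negatives can be recovered stage by stage. Data are synthetic.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.naive_bayes import GaussianNB
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.ensemble import RandomForestClassifier

        X, y = make_classification(n_samples=2000, n_features=20, weights=[0.7, 0.3],
                                   random_state=1)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

        stages = [GaussianNB(), DecisionTreeClassifier(max_depth=3, random_state=1),
                  RandomForestClassifier(n_estimators=50, random_state=1)]
        pred = np.zeros(len(y_te), dtype=int)
        remaining = np.arange(len(y_te))               # indices still labeled "normal"
        for clf in stages:
            clf.fit(X_tr, y_tr)
            flagged = clf.predict(X_te[remaining]) == 1
            pred[remaining[flagged]] = 1               # once positive, stays positive
            remaining = remaining[~flagged]

        fn = np.sum((pred == 0) & (y_te == 1))
        fp = np.sum((pred == 1) & (y_te == 0))
        print("false negatives:", fn, "false positives:", fp)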

  • GUINNESS: A GUI Based Binarized Deep Neural Network Framework for Software Programmers

    Hiroki NAKAHARA  Haruyoshi YONEKAWA  Tomoya FUJII  Masayuki SHIMODA  Shimpei SATO  

     
    PAPER-Design Tools

    Publicized: 2019/02/27  Vol: E102-D No:5  Page(s): 1003-1011

    GUINNESS (GUI-based binarized neural network synthesizer) is an open-source, GUI-based tool flow for implementing a binarized deep neural network on an FPGA; it covers both training on the GPU and inference on the FPGA. Since all operations are performed through the GUI, the software designer does not need to write any scripts to define the neural network structure or the training behavior; only the hyperparameter values must be specified. After training finishes, the tool automatically generates C++ code from which the bitstream is synthesized using the Xilinx SDSoC system design tool flow. Thus, our tool flow is suitable for software programmers who are not familiar with FPGA design. In our tool flow, we modify both the training and the inference algorithms for binarized CNN hardware. Since the hardware has limited bit precision, it lacks a minimal bias in training. Also, for inference on the hardware, the conventional batch normalization technique requires additional hardware. Our modifications solve these problems. We implemented the VGG-11 benchmark CNN on the Digilent Inc. Zedboard. Compared with conventional binarized implementations on an FPGA, the classification accuracy was almost the same, while the performance per power was 5.1 times better, the performance per area 8.0 times better, and the performance per memory 8.2 times better. We also compare the proposed FPGA design with CPU and GPU designs. Compared with the ARM Cortex-A57, it was 1776.3 times faster, dissipated 3.0 times less power, and its performance per power was 5706.3 times better. Also, compared with the Maxwell GPU, it was 11.5 times faster, dissipated 7.3 times less power, and its performance per power was 83.0 times better. The disadvantage of our FPGA-based design is that it requires additional time to synthesize the FPGA executable code: in our experiment, synthesis consumed more than three hours, and the total FPGA design took 75 hours. Since the training of the CNN accounts for most of this time, the synthesis overhead is comparatively small.
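
    The batch-normalization issue mentioned above is commonly avoided in binarized inference by folding BN into a per-channel threshold so that the activation becomes a plain integer comparison; the sketch below illustrates that trick under hypothetical parameters and is not GUINNESS's exact implementation.

        # Illustrative sketch: a common way to avoid batch-normalization hardware in a
        # binarized layer is to fold BN into a per-channel threshold, so the binarized
        # activation becomes a simple comparison. Parameters are hypothetical.
        import numpy as np

        def bn_then_sign(x, gamma, beta, mean, var, eps=1e-5):
            """Reference path: batch norm followed by sign activation."""
            return np.where(gamma * (x - mean) / np.sqrt(var + eps) + beta >= 0, 1, -1)

        def fold_threshold(gamma, beta, mean, var, eps=1e-5):
            """Per-channel threshold equivalent to BN + sign (assumes gamma > 0)."""
            return mean - beta * np.sqrt(var + eps) / gamma

        rng = np.random.default_rng(0)
        x = rng.integers(-64, 64, size=(4, 8)).astype(float)   # popcount-style sums
        gamma, beta = rng.uniform(0.5, 2.0, 8), rng.normal(0, 1, 8)
        mean, var = rng.normal(0, 5, 8), rng.uniform(1, 4, 8)

        thr = fold_threshold(gamma, beta, mean, var)
        binarized = np.where(x >= thr, 1, -1)                   # hardware-friendly compare
        assert np.array_equal(binarized, bn_then_sign(x, gamma, beta, mean, var))
        print(thr)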

  • AI@ntiPhish — Machine Learning Mechanisms for Cyber-Phishing Attack

    Yu-Hung CHEN  Jiann-Liang CHEN  

     
    INVITED PAPER

    Publicized: 2019/02/18  Vol: E102-D No:5  Page(s): 878-887

    This study proposes a novel machine learning architecture and various learning algorithms for building anti-phishing services to avoid cyber-phishing attacks. With the rapid development of information technology, hackers engage in cyber-phishing attacks to steal important personal information, which raises information security concerns. The prevention of phishing websites involves various aspects, for example, user training, public awareness, and fraudulent phishing detection. However, recent phishing research has mainly focused on preventing fraudulent phishing and has relied on manual identification, which is inefficient for real-time detection systems. In this study, we used methods such as ANOVA, chi-square (χ2), and information gain to evaluate features. Then, we filtered out the unrelated features and used the top 28 most related features to train and evaluate traditional machine learning algorithms, such as Support Vector Machine (SVM) with linear or RBF kernels, Logistic Regression (LR), Decision Tree, and K-Nearest Neighbor (KNN). This research also evaluated the above algorithms with the ensemble learning concept by combining multiple classifiers, such as AdaBoost, bagging, and voting. Finally, the eXtreme Gradient Boosting (XGBoost) model exhibited the best performance, 99.2%, among the algorithms considered in this study.
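
    The feature-evaluation-then-boosting pipeline can be sketched as follows with scikit-learn; chi-square selection stands in for the combined ANOVA/χ2/information-gain ranking, GradientBoostingClassifier stands in for XGBoost, and the data are synthetic, so the result does not reproduce the reported 99.2%.

        # Illustrative sketch: rank features, keep the top 28, then train a boosted
        # classifier. chi2 stands in for the combined ANOVA / chi-square /
        # information-gain ranking, and GradientBoosting for XGBoost; data are synthetic.
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectKBest, chi2
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=3000, n_features=60, n_informative=20,
                                   random_state=0)
        X = X - X.min(axis=0)                 # chi2 requires non-negative features
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        selector = SelectKBest(chi2, k=28).fit(X_tr, y_tr)
        clf = GradientBoostingClassifier(random_state=0)
        clf.fit(selector.transform(X_tr), y_tr)
        print("accuracy:", clf.score(selector.transform(X_te), y_te))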

  • A Highly Accurate Transportation Mode Recognition Using Mobile Communication Quality

    Wataru KAWAKAMI  Kenji KANAI  Bo WEI  Jiro KATTO  

     
    PAPER

    Publicized: 2018/10/15  Vol: E102-B No:4  Page(s): 741-750

    We demonstrate that transportation modes can be recognized from communication quality factors, without any additional sensor devices. In the demonstration, instead of using global positioning system (GPS) and accelerometer sensors, we collect mobile TCP throughputs, received signal strength indicators (RSSIs), and cellular base-station IDs (Cell IDs) through in-line network measurement while the user enjoys mobile services such as video streaming. In the accuracy evaluations, we conduct two different field experiments to collect data in six typical transportation modes (static, walking, riding a bicycle, riding a bus, riding a train, and riding a subway), and then construct classifiers by applying a support vector machine (SVM), k-nearest neighbor (k-NN), random forest (RF), and convolutional neural network (CNN). Our results show that these transportation modes can be recognized with high accuracy by using communication quality factors, as is the case when accelerometer sensors are used.
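
    A small sketch of the classification setup under synthetic data: fixed-length windows of throughput and RSSI samples are summarized into simple statistics and fed to a random forest. The feature set and data are assumptions; the study itself also uses Cell IDs and evaluates SVM, k-NN, and CNN classifiers.

        # Illustrative sketch: summarize windows of TCP throughput and RSSI into simple
        # statistics and classify the transportation mode. Data are synthetic and the
        # feature set is an assumption.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        MODES = ["static", "walking", "bus", "train"]

        def synth_window(mode):
            base = {"static": (20, 1), "walking": (15, 3), "bus": (10, 6), "train": (8, 8)}
            mean, jitter = base[mode]
            throughput = rng.normal(mean, jitter, 30)      # Mbps samples in one window
            rssi = rng.normal(-70 - 5 * jitter, 4, 30)     # dBm samples in one window
            return np.concatenate([throughput, rssi])

        def features(window):
            tp, rssi = window[:30], window[30:]
            return [tp.mean(), tp.std(), np.percentile(tp, 90),
                    rssi.mean(), rssi.std(), np.diff(rssi).std()]

        X, y = [], []
        for label, mode in enumerate(MODES):
            for _ in range(100):
                X.append(features(synth_window(mode)))
                y.append(label)

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        print(cross_val_score(clf, X, y, cv=5).mean())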

  • Quantum Algorithm on Logistic Regression Problem

    Jun Suk KIM  Chang Wook AHN  

     
    LETTER-Fundamentals of Information Systems

    Publicized: 2019/01/28  Vol: E102-D No:4  Page(s): 856-858

    We examine the feasibility of the Deutsch-Jozsa algorithm, a basic quantum algorithm, on a machine-learning-based logistic regression problem. Its major property of distinguishing the function type with an exponential speedup can help identify feature unsuitability much more quickly. Although strict conditions and restrictions must be observed, we reconfirm the quantum superiority in many aspects of modern computing.
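
    For reference, a small classical state-vector simulation of the Deutsch-Jozsa test in its phase-oracle form: the probability of measuring the all-zero state is 1 for a constant Boolean function and 0 for a balanced one. How this maps onto judging feature suitability is the paper's contribution and is not reproduced here.

        # Illustrative sketch: classical state-vector simulation of the Deutsch-Jozsa
        # algorithm (phase-oracle form). With n input qubits, measuring |0...0> has
        # probability 1 iff f is constant and 0 iff f is balanced.
        import numpy as np
        from itertools import product

        def deutsch_jozsa_zero_probability(f, n):
            # Uniform superposition over all 2^n inputs (after the first Hadamard layer).
            state = np.full(2 ** n, 1 / np.sqrt(2 ** n))
            # Phase oracle: |x> -> (-1)^f(x) |x>.
            for i, x in enumerate(product([0, 1], repeat=n)):
                state[i] *= (-1) ** f(x)
            # Second Hadamard layer; the amplitude of |0...0> is the mean of the state.
            amp_zero = state.sum() / np.sqrt(2 ** n)
            return abs(amp_zero) ** 2

        n = 3
        constant = lambda x: 0
        balanced = lambda x: x[0]           # returns 1 for exactly half of the inputs
        print(deutsch_jozsa_zero_probability(constant, n))   # ~1.0
        print(deutsch_jozsa_zero_probability(balanced, n))   # ~0.0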

  • Proactive Failure Detection Learning Generation Patterns of Large-Scale Network Logs

    Tatsuaki KIMURA  Akio WATANABE  Tsuyoshi TOYONO  Keisuke ISHIBASHI  

     
    PAPER-Network Management/Operation

    Publicized: 2018/08/13  Vol: E102-B No:2  Page(s): 306-316

    Recent carrier-grade networks use many network elements (switches, routers) and servers for various network-based services (e.g., video on demand, online gaming) that demand higher quality and better reliability. Network log data generated by these elements, such as router syslogs, are rich sources for quickly detecting the signs of critical failures and thereby maintaining service quality. However, log data contain a large number of text messages written in an unstructured format and cover various types of network events (e.g., operator login, link down); thus, the log messages genuinely important for network operation are difficult to find automatically. We propose a proactive failure-detection system for large-scale networks. It automatically finds abnormal patterns of log messages in a massive amount of data without requiring prior knowledge of the data formats used, and it can detect critical failures before they occur. To handle unstructured log messages, the system has an online log-template-extraction component for automatically extracting the format of a log message. After template extraction, the system associates critical failures with the log data that appeared before them on the basis of supervised machine learning. By associating each log message with a log template, we can characterize the generation patterns of log messages, such as burstiness, rather than just the keywords in log messages (e.g., ERROR, FAIL). We used real log data collected from a large production network to validate our system and evaluated it in detecting signs of actual failures of network equipment through a case study.
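
    A minimal sketch of one ingredient, online log-template extraction, under the simplifying assumption that masking obviously variable tokens (numbers, IP addresses, hex IDs) is enough to group messages into templates; the regex patterns are assumptions, and the system described above uses a more robust extraction method.

        # Illustrative sketch: crude log-template extraction by masking variable
        # tokens, so messages with the same structure share a template and their
        # generation pattern (e.g., burstiness) can be counted. Regexes are assumptions.
        import re
        from collections import Counter

        VARIABLE_PATTERNS = [
            (re.compile(r"\b\d{1,3}(\.\d{1,3}){3}\b"), "<IP>"),
            (re.compile(r"\b0x[0-9a-fA-F]+\b"), "<HEX>"),
            (re.compile(r"\b\d+\b"), "<NUM>"),
        ]

        def to_template(message):
            for pattern, token in VARIABLE_PATTERNS:
                message = pattern.sub(token, message)
            return message

        logs = [
            "Interface eth0 down at 10.0.0.12 code 42",
            "Interface eth1 down at 10.0.0.57 code 42",
            "User operator login from 192.168.1.9",
        ]
        templates = Counter(to_template(m) for m in logs)
        for template, count in templates.items():
            print(count, template)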

  • Improvement of Anomaly Detection Performance Using Packet Flow Regularity in Industrial Control Networks Open Access

    Kensuke TAMURA  Kanta MATSUURA  

     
    PAPER

    Vol: E102-A No:1  Page(s): 65-73

    Since cyber attacks such as cyberterrorism against Industrial Control Systems (ICSs) and cyber espionage against the companies managing them have increased, techniques to detect anomalies in their early stages are required. To achieve this purpose, several studies have developed anomaly detection methods for ICSs. In particular, some techniques using packet flow regularity in industrial control networks have achieved highly accurate detection of attacks disrupting the regularity, i.e. the normal behaviour, of ICSs. However, these methods cannot identify scanning attacks employed in cyber espionage because the probing packets assimilate into a number of normal ones. For example, the malware called Havex is customised to clandestinely acquire information from targeted ICSs using general request packets. Techniques to detect such scanning attacks using widespread packets await further investigation. Therefore, the goal of this study was to examine high-performance methods to identify anomalies even if elaborate packets designed to avoid alert systems were employed in attacks against industrial control networks. In this paper, a novel detection model for anomalous packets concealed behind normal traffic in industrial control networks is proposed. For this sophisticated detection method, we took particular note of packet flow regularity and employed the Markov-chain model to detect anomalies. Moreover, we regarded not only original packets but also packets similar to them as normal packets to reduce false alerts, because it has been indicated that an anomaly detection model using the Markov chain suffers from ample false positives caused by a number of normal but irregular packets, namely noise. To calculate the similarity between packets based on the packet flow regularity, a vector representation tool called word2vec was employed. Whilst word2vec is utilised for the calculation of word similarity in natural language processing tasks, we applied the technique to packets in ICSs to calculate packet similarity. As a result, the Markov-chain with word2vec model identified scanning packets assimilating into normal packets with higher performance than the conventional Markov-chain model. In conclusion, employing both packet flow regularity and packet similarity in industrial control networks contributes to improving the performance of anomaly detection in ICSs.
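
    A condensed sketch of the two ingredients named above, under hypothetical packet tokenization, corpus, and thresholds: a first-order Markov chain over packet "words" scores how regular a sequence is, and word2vec similarity (gensim 4.x API assumed) lets packets close to known normal ones count as normal.

        # Illustrative sketch: a Markov chain over packet "words" measures flow
        # regularity, and word2vec similarity lets near-normal packets count as normal.
        # Packet tokenization, corpus, and thresholds are hypothetical; gensim 4.x API.
        from collections import defaultdict
        from gensim.models import Word2Vec

        normal_flows = [
            ["read_req", "read_resp", "write_req", "write_resp"],
            ["read_req", "read_resp", "read_req", "read_resp"],
            ["write_req", "write_resp", "read_req", "read_resp"],
        ] * 20
        w2v = Word2Vec(normal_flows, vector_size=16, window=2, min_count=1, sg=1, seed=0)

        # First-order Markov transition counts over normal traffic.
        trans = defaultdict(lambda: defaultdict(int))
        for flow in normal_flows:
            for a, b in zip(flow, flow[1:]):
                trans[a][b] += 1

        def transition_score(a, b):
            """Probability of a->b; an unseen b may be mapped to a similar known packet."""
            if b not in trans[a] and b in w2v.wv:
                known = [w for w in trans[a] if w in w2v.wv]
                if known:
                    best = max(known, key=lambda w: w2v.wv.similarity(b, w))
                    if w2v.wv.similarity(b, best) > 0.8:     # similarity threshold (assumed)
                        b = best
            total = sum(trans[a].values()) or 1
            return trans[a].get(b, 0) / total

        def flow_score(flow):
            return min(transition_score(a, b) for a, b in zip(flow, flow[1:]))

        print("normal:", flow_score(["read_req", "read_resp", "write_req", "write_resp"]))
        print("scan  :", flow_score(["read_req", "enum_req", "enum_req", "enum_req"]))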

  • Selecting Orientation-Insensitive Features for Activity Recognition from Accelerometers

    Yasser MOHAMMAD  Kazunori MATSUMOTO  Keiichiro HOASHI  

     
    PAPER-Information Network

    Publicized: 2018/10/05  Vol: E102-D No:1  Page(s): 104-115

    Activity recognition from sensors is a classification problem over time-series data. Some research in this area utilizes handcrafted time- and frequency-domain features that differ between datasets. A categorically different approach is to use deep learning methods for feature learning. This paper explores a middle ground in which an off-the-shelf feature extractor is used to generate a large number of candidate time-domain features, followed by a feature selector designed to reduce the bias toward specific classification techniques. Moreover, this paper advocates the use of features that are mostly insensitive to sensor orientation and shows their applicability to the activity recognition problem. The proposed approach is evaluated using six different publicly available datasets collected under various conditions using different experimental protocols, and it shows comparable or higher accuracy than state-of-the-art methods on most datasets while usually using an order of magnitude fewer features.
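
    A brief sketch of the orientation-insensitivity idea on synthetic data: the acceleration magnitude, and simple window statistics of it, do not change when the sensor is rotated, unlike the raw per-axis signals.

        # Illustrative sketch: the acceleration magnitude is invariant to sensor
        # orientation, so window statistics of it make orientation-insensitive
        # features. Data and the random rotation are synthetic.
        import numpy as np

        def window_features(acc):
            """Simple time-domain features of the magnitude of an (N, 3) window."""
            mag = np.linalg.norm(acc, axis=1)
            return np.array([mag.mean(), mag.std(), mag.min(), mag.max(),
                             np.percentile(mag, 75) - np.percentile(mag, 25)])

        rng = np.random.default_rng(0)
        acc = rng.normal(0, 1, size=(128, 3)) + np.array([0.0, 0.0, 9.8])  # one window

        # A random rotation simulates wearing the same sensor in a different orientation.
        q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        rotated = acc @ q.T

        print(window_features(acc))
        print(window_features(rotated))   # nearly identical to the line above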
